ADMM for Convex Quadratic Programs: Linear Convergence and Infeasibility Detection

Authors

  • Arvind U. Raghunathan
  • Stefano Di Cairano
Abstract

In this paper, we analyze the convergence of the Alternating Direction Method of Multipliers (ADMM) on convex quadratic programs (QPs) with linear equality and bound constraints. The ADMM formulation alternates between an equality-constrained QP and a projection onto the bounds. Under the assumptions that (i) the Hessian of the objective projected onto the null space of the equality constraints (the reduced Hessian) is positive definite, and (ii) the linear independence constraint qualification holds at the optimal solution, we derive an upper bound on the rate of convergence to the solution at each iteration. In particular, we provide an explicit characterization of the rate of convergence in terms of: (a) the eigenvalues of the reduced Hessian, (b) the cosine of the Friedrichs angle between the subspace spanned by the equality constraints and the subspace spanned by the gradients of the components that are active at the solution, and (c) the distance of the inactive components of the solution from the bounds. Using this analysis, we show that if the QP is feasible, the iterates converge at a Q-linear rate, and we prescribe an optimal setting for the ADMM step-size parameter. For infeasible QPs, we show that the primal variables in ADMM converge to minimizers of the Euclidean distance between the hyperplane defined by the equality constraints and the convex set defined by the bounds. The multipliers for the bound constraints are shown to diverge along the range space of the equality constraints. Using this characterization, we also propose a termination criterion for ADMM. Numerical examples are provided to illustrate the theory.
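
Since only the abstract is available here, the following Python sketch illustrates the splitting it describes: an equality-constrained QP solve for the x-update, a projection onto the bounds for the z-update, and a multiplier update whose divergence signals infeasibility. The function name `admm_qp`, the step-size parameter `beta`, and the stopping rule are illustrative assumptions, not the authors' implementation or termination criterion.

```python
import numpy as np

def admm_qp(Q, q, A, b, lb, ub, beta=1.0, max_iter=5000, tol=1e-8):
    """Sketch of ADMM for  min 0.5 x'Qx + q'x  s.t.  Ax = b,  lb <= x <= ub,
    split as x (equality constraints) and z (bounds) with x = z."""
    n, m = Q.shape[0], A.shape[0]
    # KKT matrix of the equality-constrained QP subproblem; it is constant
    # across iterations, so a real implementation would factorize it once.
    K = np.block([[Q + beta * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    z, y = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        # x-update: minimize 0.5 x'Qx + q'x + (beta/2)||x - z + y||^2  s.t.  Ax = b
        rhs = np.concatenate([beta * (z - y) - q, b])
        x = np.linalg.solve(K, rhs)[:n]
        # z-update: Euclidean projection of x + y onto the box [lb, ub]
        z_new = np.clip(x + y, lb, ub)
        # dual update for the (scaled) bound multipliers
        y = y + x - z_new
        # For an infeasible QP, x - z does not vanish and the unscaled
        # multipliers beta*y grow without bound, which suggests a simple
        # divergence check as an infeasibility monitor.
        if max(np.linalg.norm(x - z_new), np.linalg.norm(z_new - z)) < tol:
            z = z_new
            break
        z = z_new
    return z, beta * y  # primal iterate and unscaled bound multipliers
```

For a feasible QP satisfying the stated assumptions, the iterate z in this sketch would converge Q-linearly, with a rate governed by the quantities (a)-(c) listed in the abstract; the choice of `beta` corresponds to the step-size parameter whose optimal setting the paper prescribes.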

Related articles

Infeasibility detection in the alternating direction method of multipliers for convex optimization

The alternating direction method of multipliers (ADMM) is a powerful operator splitting technique for solving structured optimization problems. For convex optimization problems, it is well-known that the iterates generated by ADMM converge to a solution provided that it exists. If a solution does not exist, then the ADMM iterates do not converge. Nevertheless, we show that the ADMM iterates yie...

Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Programming*

In this paper, we aim to prove the linear rate convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a mild calmness condition, which holds automatically for convex composite piecewise linear-quadratic programming, we establish the global Q-linear rate of convergence for a general semi-proximal ADMM w...

Local Linear Convergence of ADMM on Quadratic or Linear Programs

In this paper, we analyze the convergence of the Alternating Direction Method of Multipliers (ADMM) as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits almost monotonic convergence. We present an analysis which indicates the convergence depends on the e...

On the Convergence of Multi-Block Alternating Direction Method of Multipliers and Block Coordinate Descent Method

The paper answers several open questions about the alternating direction method of multipliers (ADMM) and the block coordinate descent (BCD) method, which are now widely used to solve large-scale convex optimization problems in many fields. For ADMM, there is still a lack of theoretical understanding of the algorithm when the objective function is not separable across the variables. In this paper, we ana...

Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Quadratic and Semi-Definite Programming

In this paper, we aim to provide a comprehensive analysis on the linear rate convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a certain error bound condition, we establish the global linear rate of convergence for a more general semi-proximal ADMM with the dual steplength being restricted to be i...



Publication year: 2015